12 research outputs found

    Co-non-solvency: Mean-field polymer theory does not describe polymer collapse transition in a mixture of two competing good solvents

    Smart polymers are a modern class of polymeric materials that often exhibit unpredictable behavior in mixtures of solvents. One such phenomenon is co-non-solvency, which occurs when two perfectly miscible, competing good solvents for a given polymer are mixed together: within intermediate mixing ratios, the polymer collapses into a compact globule. More interestingly, the polymer collapses even though the solvent quality remains good, and in fact improves as the better cosolvent is added. This puzzling phenomenon is driven by strong local concentration fluctuations. Because of the discrete, particle-based nature of the interactions, Flory-Huggins-type mean-field arguments become unsuitable. In this work, we extend the analysis of the co-non-solvency effect presented earlier [Nature Communications 5, 4882 (2014)]. We explain why co-non-solvency is a generic phenomenon that can be understood through a thermodynamic treatment of the competitive displacement of (co)solvent components. This competition can result in polymer collapse upon improvement of the solvent quality. Specific chemical details are not required to understand these complex conformational transitions. Therefore, a broad range of polymers is expected to exhibit similar reentrant coil-globule-coil transitions in competing good solvents.
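    The competitive-displacement picture can be caricatured with a toy model (a sketch for intuition only, not the authors' thermodynamic treatment): solvent and cosolvent compete Langmuir-style for polymer contact sites, and cosolvent "bridging" of monomer pairs, which requires both bound cosolvent and remaining free sites, serves as a proxy for the collapse driving force. The binding constants below are arbitrary illustrative values.

```python
def cosolvent_coverage(phi_c, k_s=1.0, k_c=50.0):
    """Langmuir-type competition of solvent (binding constant k_s) and
    a strongly preferred cosolvent (k_c >> k_s) for polymer contact
    sites; phi_c is the cosolvent fraction of the bulk mixture."""
    phi_s = 1.0 - phi_c
    return k_c * phi_c / (1.0 + k_s * phi_s + k_c * phi_c)

def bridging_proxy(phi_c, **kw):
    """A cosolvent molecule can bridge two monomers only while sites
    remain partially occupied: theta * (1 - theta) vanishes in pure
    solvent (theta -> 0) and in pure cosolvent (theta -> 1), peaking
    at intermediate mixing ratios -- the reentrant signature."""
    theta = cosolvent_coverage(phi_c, **kw)
    return theta * (1.0 - theta)

if __name__ == "__main__":
    for phi in (0.0, 0.02, 0.2, 1.0):
        print(f"phi_c={phi:4.2f}  theta={cosolvent_coverage(phi):.3f}  "
              f"bridges~{bridging_proxy(phi):.3f}")
```

    Even this crude picture reproduces the qualitative point of the abstract: the collapse proxy is maximal at intermediate composition although the cosolvent is, site by site, the better solvent.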

    Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in multiscale simulations and exploring material properties drive the continual development of new simulation methods and optimization algorithms. In computational terms, these methods require parallelization schemes that make productive use of computational resources throughout each simulation, from its very start. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an \textit{a priori} rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force-computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
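    The core idea of an a priori wall rearrangement can be sketched in one dimension: given a per-particle cost estimate known before the run (e.g. high-resolution particles cost more than coarse-grained ones), place the subdomain walls so that every rank receives roughly equal total cost. The function and cost model below are illustrative, not the paper's implementation.

```python
import bisect

def place_walls(positions, costs, n_domains):
    """Place n_domains - 1 subdomain walls along one axis so that each
    subdomain carries approximately the same estimated computational
    cost, using an a priori per-particle cost model."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    xs = [positions[i] for i in order]
    # cumulative cost along the axis; cum[k] = cost of first k particles
    cum = [0.0]
    for i in order:
        cum.append(cum[-1] + costs[i])
    total = cum[-1]
    walls = []
    for k in range(1, n_domains):
        target = total * k / n_domains
        # last particle before the cumulative cost reaches the target
        j = bisect.bisect_left(cum, target) - 1
        j = min(max(j, 0), len(xs) - 1)
        walls.append(xs[j])
    return walls
```

    With a uniform cost model this degenerates to an ordinary equal-volume spatial decomposition; the gain appears exactly when the cost density is spatially heterogeneous, as in adaptive resolution setups.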

    Equilibration of High Molecular-Weight Polymer Melts: A Hierarchical Strategy

    A strategy is developed for generating equilibrated high-molecular-weight polymer melts, described with microscopic detail, by sequentially backmapping coarse-grained (CG) configurations. The microscopic test model is generic but retains features such as hard excluded-volume interactions and realistic melt densities. The microscopic representation is mapped onto a model of soft spheres with fluctuating size, where each sphere represents a microscopic subchain with $N_{\rm b}$ monomers. By varying $N_{\rm b}$, a hierarchy of CG representations at different resolutions is obtained. Within this hierarchy, CG configurations equilibrated with Monte Carlo at low resolution are sequentially fine-grained into CG melts described at higher resolution. A Molecular Dynamics scheme is employed to slowly introduce the microscopic details into the latter. All backmapping steps involve only local polymer relaxation; thus, the computational efficiency of the scheme is independent of molecular weight and simply proportional to system size. To demonstrate the robustness of the approach, microscopic configurations containing up to $n=1000$ chains with polymerization degree $N=2000$ are generated, and equilibration is confirmed by monitoring key structural and conformational properties. The extension to much longer chains or to branched polymers is straightforward.
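    The elementary mapping step of such a hierarchy, grouping consecutive subchains of $N_{\rm b}$ monomers into soft spheres characterized by a center of mass and a gyration radius, can be sketched as follows (a schematic illustration, not the authors' code):

```python
import math

def chain_to_blobs(monomers, n_b):
    """Map a microscopic chain (list of (x, y, z) monomer positions)
    onto soft blobs: each blob stores the center of mass and the
    radius of gyration of a consecutive subchain of n_b monomers."""
    assert len(monomers) % n_b == 0, "chain length must be a multiple of n_b"
    blobs = []
    for start in range(0, len(monomers), n_b):
        sub = monomers[start:start + n_b]
        com = tuple(sum(p[d] for p in sub) / n_b for d in range(3))
        rg2 = sum(sum((p[d] - com[d]) ** 2 for d in range(3))
                  for p in sub) / n_b
        blobs.append((com, math.sqrt(rg2)))
    return blobs
```

    Halving $N_{\rm b}$ and re-applying the mapping yields the next, finer level of the hierarchy; fine-graining runs this construction in reverse, reinserting degrees of freedom inside each sphere.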

    One size fits all: equilibrating chemically different polymer liquids through universal long-wavelength description

    Mesoscale behavior of polymers is frequently described by universal laws. This physical property motivates us to propose a new modeling concept that groups polymers into classes sharing a common long-wavelength representation. Within the same class, samples of different materials can be generated from this representation, encoded in a single library system. We focus on homopolymer melts, grouped according to the invariant degree of polymerization. They are described with a bead-spring model, varying chain stiffness and density to mimic chemical diversity. In a renormalization-group-like fashion, library samples provide a universal blob-based description, which is hierarchically backmapped to create configurations of other class members. Thus, large systems with experimentally relevant invariant degrees of polymerization (so far accessible only at a very coarse-grained level) can be described microscopically. Equilibration is verified by comparing conformations and melt structure with smaller-scale conventional simulations.
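    The class label used here, the invariant degree of polymerization, is commonly defined as $\bar{N} = (\rho R_{\rm e}^3 / N)^2$, where $\rho$ is the monomer number density, $N$ the number of monomers per chain, and $R_{\rm e}$ the root-mean-square end-to-end distance; chemically different melts with equal $\bar{N}$ share the same long-wavelength description. A minimal helper (the numbers in the usage note are illustrative, not taken from the paper):

```python
def invariant_degree_of_polymerization(rho, n, r_end):
    """Invariant degree of polymerization N_bar = (rho * R_e^3 / N)^2.

    rho   -- monomer number density
    n     -- number of monomers per chain
    r_end -- root-mean-square end-to-end distance of a chain
    Melts with equal N_bar belong to the same universality class and
    can be generated from one blob-based library representation."""
    return (rho * r_end ** 3 / n) ** 2
```

    Because $R_{\rm e} \sim \sqrt{N}$ for ideal melt chains, $\bar{N}$ grows linearly with $N$ at fixed density and stiffness, which is why experimentally relevant values quickly become expensive for conventional microscopic simulation.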

    Hierarchical modeling of polystyrene melts: From soft blobs to atomistic resolution

    We demonstrate that hierarchical backmapping strategies incorporating generic blob-based models can equilibrate melts of high-molecular-weight polymers described with chemically specific, atomistic models. The central idea behind these strategies is first to represent polymers by chains of large soft blobs (spheres) and efficiently equilibrate the melt on the mesoscopic scale. Then, the degrees of freedom of more detailed models are reinserted step by step. The procedure terminates when the atomistic description is reached. Reinsertions are computationally feasible because the fine-grained melt must be re-equilibrated only locally. To develop the method, we choose a polymer with sufficient complexity: polystyrene (PS), characterized by stereochemistry and bulky side groups. Our backmapping strategy bridges mesoscopic and atomistic scales by incorporating a blob-based, a moderately CG, and a united-atom model of PS. We demonstrate that the generic blob-based model can be parameterized to reproduce the mesoscale properties of a specific polymer -- here PS. The moderately CG model captures stereochemistry. To perform the backmapping, we improve and adjust several fine-graining techniques. We verify equilibration of the backmapped PS melts by comparing their structural and conformational properties with reference data from smaller systems equilibrated with less efficient methods.

    Code modernization strategies for short-range non-bonded molecular dynamics simulations

    Modern HPC systems increasingly rely on greater core counts and wider vector registers, so applications need to be adapted to fully utilize these hardware capabilities. One class of applications that can benefit from this increase in parallelism is molecular dynamics simulations. In this paper, we describe our efforts to modernize the ESPResSo++ molecular dynamics simulation package by restructuring its particle data layout for efficient memory access and applying vectorization techniques to the calculation of short-range non-bonded forces, which yields an overall three-fold speedup and serves as a baseline for further optimizations. We also implement fine-grained parallelism for multi-core CPUs through HPX, a C++ runtime system that uses lightweight threads and an asynchronous many-task approach to maximize concurrency. Our goal is to evaluate the performance of an HPX-based approach against the bulk-synchronous MPI-based implementation. This requires introducing an additional layer to the domain decomposition scheme that defines the task granularity. On spatially inhomogeneous systems, which impose a corresponding load imbalance on traditional MPI-based approaches, we demonstrate that, by choosing an optimal task size, the efficient work-stealing mechanisms of HPX can overcome the communication overhead, resulting in an overall 1.4-fold speedup over the baseline MPI version.
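    The data-layout restructuring can be illustrated in miniature: converting an array-of-structures (one record per particle) into a structure-of-arrays (one contiguous array per component) lets inner loops stream through memory linearly, which is the access pattern that SIMD vectorization in the actual C++ kernels depends on. A schematic Python sketch using the stdlib array module (not ESPResSo++ code):

```python
from array import array

def aos_to_soa(particles):
    """Convert an array-of-structures particle list
    [{'x': .., 'y': .., 'z': ..}, ...] into a structure-of-arrays:
    one contiguous double-precision array per coordinate."""
    return {k: array("d", (p[k] for p in particles))
            for k in ("x", "y", "z")}

def squared_distances_from_origin(soa):
    """Kernel streaming through each component array sequentially --
    the unit-stride pattern that vectorizing compilers exploit,
    in contrast to the strided access an AoS layout would force."""
    return [x * x + y * y + z * z
            for x, y, z in zip(soa["x"], soa["y"], soa["z"])]
```

    In the C++ kernels the same transformation additionally enables aligned loads and explicit SIMD intrinsics over the per-component arrays.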

    ESPResSo++: A modern multiscale simulation package for soft matter systems

    The redesigned Extensible Simulation Package for Research on Soft matter systems (ESPResSo++) is a free, open-source, parallelized, object-oriented simulation package designed to perform many-particle simulations, principally molecular dynamics and Monte Carlo, of condensed soft matter systems. In addition to the standard simulation methods found in well-established packages, ESPResSo++ can perform Adaptive Resolution Scheme (AdResS) simulations: multiscale simulations of molecular systems in which the level of resolution of each molecule can change on the fly. With extensibility as the main design objective, the software features a highly modular C++ kernel coupled to a Python user interface. This makes it easy to add new algorithms, set up a simulation, perform online analysis, use complex workflows, and steer a simulation. The extreme flexibility of the software allows the study of a wide range of systems. The modular structure enables scientists to use ESPResSo++ as a research platform for their own methodological developments, which at the same time allows the software to grow and acquire the most modern methods. ESPResSo++ targets a broad range of architectures and is licensed under the GNU General Public License.
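    The on-the-fly resolution change in AdResS is governed by a smooth weighting function $w(r)$ that equals 1 in the atomistic region, 0 in the coarse-grained region, and interpolates across a hybrid layer; pair forces are blended as $F_{ij} = w_i w_j F^{\rm AA}_{ij} + (1 - w_i w_j) F^{\rm CG}_{ij}$. A sketch of the commonly used cosine-squared form (illustrative parameters, not the package's internal code):

```python
import math

def adress_weight(r, r_at, d_hy):
    """Resolution weight for a particle at distance r from the center
    of the atomistic zone: 1 inside radius r_at (fully atomistic),
    0 beyond r_at + d_hy (fully coarse-grained), and a smooth cos^2
    interpolation across the hybrid layer of width d_hy."""
    if r <= r_at:
        return 1.0
    if r >= r_at + d_hy:
        return 0.0
    return math.cos(math.pi * (r - r_at) / (2.0 * d_hy)) ** 2
```

    The smoothness of $w$ at both boundaries is what lets molecules cross between resolution regions without force discontinuities.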